Consensus in Networks Prone to Link Failures
We consider deterministic distributed algorithms solving Consensus in
synchronous networks of arbitrary topologies. Links are prone to failures.
Agreement is understood as holding in each connected component of a network
obtained by removing faulty links. We introduce the concept of stretch, which
is a function of the number of connected components of a network and their
respective diameters. Fast and early-stopping algorithms solving Consensus are
defined by referring to the stretch resulting from removing faulty links. We develop
algorithms that rely only on nodes knowing their own names and the ability to
associate communication with local ports. A network has nodes and it starts
with functional links. We give a general algorithm operating in time
that uses messages of bits. If we additionally restrict executions
to be subject to a bound on stretch, then there is a fast algorithm
solving Consensus in time using messages of bits. Let
be an unknown stretch occurring in an execution; we give an algorithm
working in time and using messages of bits. We
show that Consensus can be solved in the optimal time, but at the
cost of increasing message size to . We also demonstrate how to
solve Consensus by an algorithm that uses only non-faulty links and
works in time , while nodes start with their ports mapped to neighbors
and messages carry bits. We prove lower bounds on the performance of
Consensus solutions that refer to parameters of evolving network topologies and
the knowledge available to nodes.
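The per-component agreement requirement above can be illustrated with a minimal synchronous min-flooding sketch. This is not the paper's algorithm (which additionally exploits stretch and port-based communication); it only shows how, after enough rounds, every node of a connected component of the surviving network decides on the same value:

```python
from collections import defaultdict

def flood_min_consensus(n, surviving_links, inputs, rounds):
    """Synchronous min-flooding over the links that survived failures.
    Each round, every node exchanges its current value with its neighbors
    and keeps the minimum it has seen. After at least diameter-many
    rounds, all nodes of a connected component decide on that
    component's minimum input value."""
    nbrs = defaultdict(list)
    for u, v in surviving_links:
        nbrs[u].append(v)
        nbrs[v].append(u)
    val = list(inputs)
    for _ in range(rounds):
        val = [min([val[u]] + [val[w] for w in nbrs[u]]) for u in range(n)]
    return val

# Link failures split the network into components {0, 1, 2} and {3, 4};
# each component reaches agreement separately.
decisions = flood_min_consensus(5, [(0, 1), (1, 2), (3, 4)], [7, 3, 9, 5, 2], rounds=5)
```

Here agreement holds per component, exactly as in the model above: nodes 0-2 decide one value and nodes 3-4 another, since no surviving link joins the two components.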
Fast Agreement in Networks with Byzantine Nodes
We study Consensus in synchronous networks with arbitrary connected topologies. Nodes may be faulty, in the sense of either being Byzantine or being prone to crashing. Let t denote a known upper bound on the number of faulty nodes, and D_s denote the maximum diameter of a network obtained by removing up to s nodes, assuming the network is (s+1)-connected. We give an algorithm for Consensus running in time t + D_{2t} with nodes subject to Byzantine faults. We show that, for any algorithm solving Consensus for Byzantine nodes, there is a network G and an execution of the algorithm on this network that takes Ω(t + D_{2t}) rounds. We give an algorithm solving Consensus in t + D_{t} communication rounds with Byzantine nodes using authenticated messages of polynomial size. We show that for any numbers t and d > 4, there exists a network G and an algorithm solving Consensus with Byzantine nodes using authenticated messages in fewer than t + 3 rounds on G, but all algorithms solving Consensus without message authentication require at least t + d rounds on G. This separates Consensus with Byzantine nodes from Consensus with Byzantine nodes using message authentication, with respect to asymptotic time performance in networks of arbitrary connected topologies, unlike the case of complete networks. Let f denote the number of failures actually occurring in an execution, unknown to the nodes. We develop an algorithm solving Consensus against crash failures and running in time O(f + D_{f}), assuming only that nodes know their names and can differentiate among ports; this algorithm is also communication-efficient, using messages of size O(m log n), where n is the number of nodes and m is the number of edges. We give a lower bound of t + D_t - 2 on the running time of any deterministic solution to Consensus in (t+1)-connected networks, if t nodes may crash.
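The parameter D_s that drives these bounds can be computed by brute force for small graphs. A sketch, for intuition only (the helper names are hypothetical and the enumeration is exponential in s, so this is not how one would compute D_s at scale):

```python
from itertools import combinations
from collections import deque

def diameter(adj, alive):
    """Diameter of the subgraph induced by `alive` via BFS from every
    node; returns None if the removal disconnected the graph."""
    worst = 0
    for src in alive:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w in alive and w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        if len(dist) < len(alive):
            return None
        worst = max(worst, max(dist.values()))
    return worst

def D(adj, s):
    """D_s: the largest diameter over all ways of removing up to s nodes
    that leave the network connected ((s+1)-connectivity guarantees at
    least one connected outcome for every removal of up to s nodes)."""
    nodes = set(adj)
    worst = 0
    for k in range(s + 1):
        for removed in combinations(sorted(nodes), k):
            d = diameter(adj, nodes - set(removed))
            if d is not None:
                worst = max(worst, d)
    return worst

# A 5-cycle is 2-connected: its diameter is 2, and removing one node
# leaves a 4-node path of diameter 3, so D_0 = 2 while D_1 = 3.
cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
```

The example shows why D_s can exceed the fault-free diameter: removing nodes lengthens shortest paths among the survivors, which is precisely what the t + D_{2t} and t + D_t running times account for.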
Optimal Algorithms for Free Order Multiple-Choice Secretary
Suppose we are given an integer and a collection of boxes labeled
by an adversary, each containing a number chosen from an unknown distribution.
We have to choose an order to sequentially open these boxes, and each time we
open the next box in this order, we learn its number. If we reject a number in
a box, the box cannot be recalled. Our goal is to accept the largest of
these numbers, without necessarily opening all boxes. This is the free order
multiple-choice secretary problem. Free order variants were studied extensively
for the secretary and prophet problems. Kesselheim, Kleinberg, and Niazadeh
(KKN, STOC'15) initiated a study of randomness-efficient algorithms (with the
cheapest order in terms of used random bits) for the free order secretary
problems.
We present an algorithm for free order multiple-choice secretary that is
simultaneously optimal in competitive ratio and in the amount of randomness used.
That is, we construct a distribution on orders with optimal entropy
such that a deterministic multiple-threshold algorithm is
-competitive. This improves in three ways on the previous
best construction by KKN, whose competitive ratio is .
Our competitive ratio is (near-)optimal for the multiple-choice secretary
problem; it works for exponentially larger parameter ; and our algorithm is
a simple deterministic multiple-threshold algorithm, while that in KKN is
randomized. We also prove a corresponding lower bound on the entropy of optimal
solutions for the multiple-choice secretary problem, matching the entropy of our
algorithm; no such lower bound was previously known.
We obtain our algorithmic results with a host of new techniques, and with
these techniques we also significantly improve the previous results of KKN
on constructing entropy-optimal distributions for the classic free order
secretary problem.
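For intuition about threshold algorithms in this family, the simplest baseline is the classic 1/e rule for the single-choice secretary problem under a uniformly random order. This sketch is emphatically not the abstract's contribution, which concerns an entropy-optimal distribution on orders and a multiple-threshold generalization; it only shows the observe-then-accept structure:

```python
import math
import random

def classic_secretary(values, order):
    """Classic 1/e rule for single-choice secretary: observe the first
    n/e boxes without accepting anything, then accept the first value
    that beats everything seen so far (falling back to the last box).
    Under a uniformly random order this accepts the overall maximum
    with probability approaching 1/e ~ 0.368."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max((values[order[i]] for i in range(cutoff)),
                    default=float("-inf"))
    for i in range(cutoff, n):
        if values[order[i]] > best_seen:
            return values[order[i]]
    return values[order[-1]]

# Empirical check of the ~1/e success rate over random orders.
random.seed(0)
vals = list(range(20))
wins = sum(classic_secretary(vals, random.sample(range(20), 20)) == 19
           for _ in range(10000))
```

The randomness cost of this baseline is an entire uniformly random permutation; the point of the entropy-optimal construction above is to achieve comparable guarantees while drawing the order from a distribution with far lower entropy.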
Adaptive Massively Parallel Algorithms for Cut Problems
We study the Weighted Min Cut problem in the Adaptive Massively Parallel
Computation (AMPC) model. In 2019, Behnezhad et al. [3] introduced the AMPC
model as an extension of the Massively Parallel Computation (MPC) model. In the
past decade, research on highly scalable algorithms has had significant impact
on many massive systems. The MPC model, introduced in 2010 by Karloff et al.
[16], which is an abstraction of famous practical frameworks such as MapReduce,
Hadoop, Flume, and Spark, has been at the forefront of this research. While
great strides have been taken to create highly efficient MPC algorithms for a
range of problems, recent progress has been limited by the 1-vs-2 Cycle
Conjecture [20], which postulates that the simple problem of distinguishing
between one and two cycles requires Ω(log n) MPC rounds. In the AMPC
model, each machine has adaptive read access to a distributed hash table even
when communication is restricted (i.e., in the middle of a round). While
remaining practical [4], this gives algorithms the power to bypass limitations
like the 1-vs-2 Cycle Conjecture.
We give the first sublogarithmic AMPC algorithm, requiring
rounds, for -approximate weighted Min Cut. Our algorithm is
inspired by the divide and conquer approach of Ghaffari and Nowicki [11], which
solves the -approximate weighted Min Cut problem in rounds of MPC using the classic result of Karger and Stein [15].
Our work is fully scalable in the sense that the local memory of each machine
is for any constant . There are no o(log n)-round MPC algorithms for Min Cut in
this memory regime, assuming the 1-vs-2 Cycle Conjecture holds. The exponential speedup in AMPC is the result of
decoupling the different layers of the divide and conquer algorithm and solving
all layers in rounds.
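The contraction primitive underlying the classic result of Karger and Stein [15], on which both the MPC algorithm of Ghaffari and Nowicki and the AMPC algorithm above build, can be sketched sequentially for the unweighted case. This is only the basic primitive; the point of the AMPC result is to decouple and parallelize the contraction layers, which this sketch does not attempt:

```python
import random

def karger_min_cut(n, edges, trials=200):
    """Karger's random-contraction Min Cut (unweighted sketch): repeatedly
    contract random edges until two super-nodes remain; the original edges
    crossing between them form a cut. A single run finds a minimum cut
    with probability >= 2/(n(n-1)), so repetition boosts the success
    probability; Karger-Stein improves this by recursing instead of
    restarting from scratch."""
    best = float("inf")
    for _ in range(trials):
        parent = list(range(n))

        def find(x):
            # Union-find with path halving tracks super-nodes.
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        comps = n
        pool = edges[:]
        random.shuffle(pool)
        for u, v in pool:
            if comps == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:          # skip self-loops inside a super-node
                parent[ru] = rv
                comps -= 1
        best = min(best, sum(1 for u, v in edges if find(u) != find(v)))
    return best

# Two triangles joined by a single bridge: the minimum cut is that bridge.
random.seed(7)
bridge_graph = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
```

In the divide-and-conquer structure described above, each level of recursion performs such contractions on progressively smaller graphs; the AMPC model's adaptive access to a distributed hash table is what lets all those layers be resolved without paying a round per level.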